Summary of week 4 study
In week 4, I studied how to use linear discriminant analysis (LDA) to classify a categorical target, and how to use k-means to cluster samples based on their multivariate observations, following the DataCamp exercises.
A few more keywords for this week: covariance matrix, correlation matrix, training/test dataset and Euclidean distance.
## Loading packages
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(tidyr)
library(ggplot2)
library(GGally)
##
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
##
## nasa
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
This week, we use the Boston data from MASS to explore linear discriminant analysis (LDA) and cluster analysis. The Boston data frame contains the crime rate and related information for the Boston area, USA:

- crim: per capita crime rate by town.
- zn: proportion of residential land zoned for lots over 25,000 sq.ft.
- indus: proportion of non-retail business acres per town.
- chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
- nox: nitrogen oxides concentration (parts per 10 million).
- rm: average number of rooms per dwelling.
- age: proportion of owner-occupied units built prior to 1940.
- dis: weighted mean of distances to five Boston employment centres.
- rad: index of accessibility to radial highways.
- tax: full-value property-tax rate per $10,000.
- ptratio: pupil-teacher ratio by town.
- black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
- lstat: lower status of the population (percent).
- medv: median value of owner-occupied homes in $1000s.

A more detailed data description can be found here.
## load the Boston data from MASS
data('Boston')
dim(Boston)
## [1] 506 14
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
Boston has 506 rows and 14 columns. chas and rad are stored as integers, and the other variables are numeric (floating point).
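As a quick sanity check (not part of the DataCamp exercise), we can also confirm that the data contain no missing values before going further:
# count missing values per column; all counts should be zero for Boston
colSums(is.na(Boston))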
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
From the summary table, we can read the minimum, maximum, mean, median and quartiles of each variable. The variables are on very different scales, so we will standardize them using their means and standard deviations (SDs).
ggpairs(Boston, mapping = aes(col = 'tomato', alpha = 0.05),
        lower = list(combo = wrap("facethist", bins = 30)),
        upper = list(continuous = wrap("cor", size = 2.5)))
Only rm looks approximately normally distributed; this is not the case for the remaining variables. rad, tax and lstat are the three variables most strongly positively associated with crim, while medv and dis are the two most negatively associated with the target. Other strong pairwise correlations include tax vs rad (0.91), indus vs nox (0.76), age vs nox (0.73), zn vs dis (0.66), rm vs medv (0.69), medv vs lstat (-0.73), dis vs age (-0.74), dis vs nox (-0.76) and lstat vs rm (-0.61).
## standardize Boston
boston_scaled <- scale(Boston)
## scale() returns a matrix, so transform it to a data.frame
boston_scaled <- as.data.frame(boston_scaled)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
The data are now standardized by the mean and SD of each variable. As we can see, the means of all new variables are zero, and the variables are on the same scale.
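For reference, scale() simply subtracts each column's mean and divides by its standard deviation. A minimal sketch of the same transformation by hand, for the crim column only (crim_std is an illustrative name):
# manual standardization of one column; should reproduce scale(Boston)[, "crim"]
crim_std <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
summary(crim_std)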
## create a quantile vector of crim from the scaled data
bins <- quantile(boston_scaled$crim)
## create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, labels = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
## drop old crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
## add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
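Because the breaks are the quartiles of crim, the four crime classes should be roughly equal in size; a quick check, not in the original exercise:
# class counts; quartile-based breaks give roughly equal classes
table(boston_scaled$crime)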
## how many rows in the scaled data
n <- nrow(boston_scaled)
## set seed to repeat the random sampling
set.seed(1111)
## randomly permute the row indices 1..n
ind <- sample(n, size = n)
## use the first 80% of the permuted indices as training data
train <- boston_scaled[head(ind, n = 0.8 * n), ]
dim(train)
## [1] 404 14
## use the last 20% of the permuted indices as test data
test <- boston_scaled[tail(ind, n = 0.2 * n), ]
dim(test)
## [1] 102 14
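The head/tail trick works because ind is a random permutation of the row indices. An equivalent split, sketched here with the illustrative names train_ind, train2 and test2, samples the training indices directly:
# sample 80% of the row indices for training; the remainder form the test set
train_ind <- sample(n, size = round(0.8 * n))
train2 <- boston_scaled[train_ind, ]
test2 <- boston_scaled[-train_ind, ]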
# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "pink", tex = 1, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)
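To see how much of the between-class variance each discriminant captures, we can compute the "proportion of trace" from the singular values stored in the fitted lda object (printing lda.fit shows the same numbers); prop_trace is an illustrative name:
# proportion of between-group variance explained by each linear discriminant
prop_trace <- lda.fit$svd^2 / sum(lda.fit$svd^2)
round(prop_trace, 2)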
## save the correct classes from test data
correct_classes <- test$crime
## remove the crime variable from test data
test <- dplyr::select(test, -crime)
## predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 18 10 0 0
## med_low 7 14 3 0
## med_high 1 14 9 2
## high 0 0 0 24
From the prediction table above, the model predicts the high crime category with 100% accuracy (24/24), followed by low with 64% (18/(18+10+0+0)) and med_low with 58% (14/(7+14+3+0)). The med_high class is the hardest, with only 35% (9/(1+14+9+2)) predicted correctly. Overall, the accuracy of the model is (18+14+9+24)/102 = 63.7%, so the error rate is 1 - 63.7% = 36.3%.
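Rather than computing these rates by hand, they can be derived from the confusion matrix directly; a small sketch (conf is an illustrative name):
# confusion matrix of true vs. predicted classes
conf <- table(correct = correct_classes, predicted = lda.pred$class)
# overall accuracy: share of observations on the diagonal
sum(diag(conf)) / sum(conf)
# per-class accuracy: correct predictions divided by the class sizes
diag(conf) / rowSums(conf)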
## reload the data
data("Boston")
## rescale the data
boston_scaledNew <- scale(Boston)
boston_scaledNew <- as.data.frame(boston_scaledNew)
## calculate the distance matrix using the default Euclidean method
boston_scaledNew_dist <- dist(boston_scaledNew)
summary(boston_scaledNew_dist)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
The minimum and maximum Euclidean distances in boston_scaledNew are 0.13 and 14.40, and the mean distance between observations is 4.91.
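dist() also supports other metrics through its method argument; for example, a Manhattan distance matrix (not used in this analysis; boston_scaledNew_man is an illustrative name) would be:
# Manhattan (city-block) distances between the scaled observations
boston_scaledNew_man <- dist(boston_scaledNew, method = "manhattan")
summary(boston_scaledNew_man)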
## set seed to repeat the randomness in K-means
set.seed(1234)
# set the maximum number of clusters to try
k_max <- 10
# calculate the total within cluster sum of squares (WCSS)
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaledNew, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
From the above figure, we choose k = 2 clusters, because that is the elbow position: the total within-cluster sum of squares (WCSS) drops sharply up to that point, while the number of clusters stays small.
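Since k-means depends on the random initial centers, the clustering can vary between runs. A common safeguard, beyond setting the seed, is to restart the algorithm several times with the nstart argument and keep the best solution (km_stable is an illustrative name):
# run k-means 20 times from random starts and keep the lowest-WCSS solution
km_stable <- kmeans(boston_scaledNew, centers = 2, nstart = 20)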
# k-means clustering on the scaled data
km <- kmeans(boston_scaledNew, centers = 2)
# plot the Boston dataset with clusters
ggpairs(boston_scaledNew, mapping = aes(col = as.factor(km$cluster), alpha = 0.05),
        lower = list(combo = wrap("facethist", bins = 30)),
        upper = list(continuous = wrap("cor", size = 2.5)))
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
The same warning is repeated for several panels; it most likely comes from the dummy variable chas being constant within one of the clusters, so its within-group standard deviation is zero and the correlation is undefined.